security group
Automated Cloud Infrastructure-as-Code Reconciliation with AI Agents
Zhenning Yang, Hui Guan, Victor Nicolet, Brandon Paulsen, Joey Dodds, Daniel Kroening, Ang Chen
Cloud infrastructure is managed through a mix of interfaces -- traditionally, cloud consoles, command-line interfaces (CLIs), and SDKs have been the tools of choice. Recently, Infrastructure-as-Code (IaC) frameworks (e.g., Terraform) have quickly gained popularity. Unlike conventional tools, IaC frameworks encode the infrastructure in a "source-of-truth" configuration. They can automatically carry out modifications to the cloud -- deploying, updating, or destroying resources -- to bring the actual infrastructure into alignment with the IaC configuration. However, when IaC is used alongside consoles, CLIs, or SDKs, it loses visibility into external changes, causing infrastructure drift: the configuration becomes outdated, and later IaC operations may undo valid updates or trigger errors. We present NSync, an automated system for IaC reconciliation that propagates out-of-band changes back into the IaC program. Our key insight is that infrastructure changes ultimately all occur via cloud API invocations -- the lowest layer for cloud management operations. NSync gleans insights from API traces to detect drift (i.e., non-IaC changes) and reconcile it (i.e., update the IaC configuration to capture the changes). It employs an agentic architecture that leverages LLMs to infer high-level intents from noisy API sequences, synthesize targeted IaC updates using specialized tools, and continually improve through a self-evolving knowledge base of past reconciliations. We further introduce a novel evaluation pipeline for injecting realistic drifts into cloud infrastructure and assessing reconciliation performance. Experiments across five real-world Terraform projects and 372 drift scenarios show that NSync outperforms the baseline in both accuracy (from 0.71 to 0.97 pass@3) and token efficiency (a 1.47× improvement).
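As a rough illustration of the core idea (not NSync's actual implementation), the sketch below compares a recorded cloud API trace against the resources tracked by the IaC state and flags mutations that did not originate from the IaC tool. All names here (ApiCall, detect_drift, the "caller" field) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ApiCall:
    """A single recorded cloud API invocation (hypothetical schema)."""
    action: str        # e.g. "AuthorizeSecurityGroupIngress"
    resource_id: str   # e.g. "sg-0123456789abcdef0"
    parameters: dict

def detect_drift(api_trace: list[ApiCall], iac_state: dict) -> list[ApiCall]:
    """Return API calls that mutate IaC-managed resources but did not
    originate from the IaC tool itself (illustrative heuristic only)."""
    managed_ids = set(iac_state.keys())
    drift = []
    for call in api_trace:
        out_of_band = call.parameters.get("caller") != "terraform"
        if call.resource_id in managed_ids and out_of_band:
            drift.append(call)
    return drift

# Example: a security-group rule added via the CLI shows up as drift.
trace = [ApiCall("AuthorizeSecurityGroupIngress", "sg-0123",
                 {"caller": "aws-cli", "port": 443})]
state = {"sg-0123": {"ingress": []}}
print(detect_drift(trace, state))  # -> the CLI-originated call
```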
Run secure processing jobs using PySpark in Amazon SageMaker Pipelines
Amazon SageMaker Studio can help you build, train, debug, deploy, and monitor your models and manage your machine learning (ML) workflows. Amazon SageMaker Pipelines enables you to build a secure, scalable, and flexible MLOps platform within Studio. In this post, we explain how to run PySpark processing jobs within a pipeline. This enables anyone who wants to train a model using Pipelines to also preprocess training data, postprocess inference data, or evaluate models using PySpark. This capability is especially relevant when you need to process large-scale data.
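A minimal sketch of the pattern, assuming the SageMaker Python SDK: a PySparkProcessor is wrapped in a ProcessingStep and added to a pipeline. The role ARN, bucket paths, script name, and versions are placeholders to substitute with your own.

```python
from sagemaker.spark.processing import PySparkProcessor
from sagemaker.workflow.steps import ProcessingStep
from sagemaker.workflow.pipeline import Pipeline

# Placeholder values -- substitute your own role, bucket, and script.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

spark_processor = PySparkProcessor(
    base_job_name="spark-preprocess",
    framework_version="3.1",          # Spark container version
    role=role,
    instance_count=2,
    instance_type="ml.m5.xlarge",
)

# Resolve the job's inputs/outputs/code so the step can reference them.
run_args = spark_processor.get_run_args(
    submit_app="preprocess.py",       # your PySpark script
    arguments=["--input", "s3://my-bucket/raw/",
               "--output", "s3://my-bucket/processed/"],
)

step_preprocess = ProcessingStep(
    name="PreprocessWithPySpark",
    processor=spark_processor,
    inputs=run_args.inputs,
    outputs=run_args.outputs,
    job_arguments=run_args.arguments,
    code=run_args.code,
)

pipeline = Pipeline(name="pyspark-processing-pipeline", steps=[step_preprocess])
# pipeline.upsert(role_arn=role); pipeline.start()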
Deploy ML models at the edge with Microk8s, Seldon and Istio
Edge computing is defined as solutions that move data processing to or near the point of data generation. This means that the results of machine learning model inference can be delivered to customers faster, creating a near-real-time inference experience. This makes the edge a perfect place for your models. Consider Gartner's prediction: "Around 10% of enterprise-generated data is created and processed outside a traditional centralized data centre or cloud. By 2025, Gartner predicts this figure will reach 75%".
Accelerating PyTorch Transformers with Intel Sapphire Rapids - part 1
About a year ago, we showed you how to distribute the training of Hugging Face transformers on a cluster of third-generation Intel Xeon Scalable CPUs (aka Ice Lake). Recently, Intel has launched the fourth generation of Xeon CPUs, code-named Sapphire Rapids, with exciting new instructions that speed up operations commonly found in deep learning models. In this post, you will learn how to accelerate a PyTorch training job with a cluster of Sapphire Rapids servers running on AWS. We will use the Intel oneAPI Collective Communications Library (CCL) to distribute the job, and the Intel Extension for PyTorch (IPEX) library to automatically put the new CPU instructions to work. As both libraries are already integrated with the Hugging Face transformers library, we will be able to run our sample scripts out of the box without changing a line of code.
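To make the two library hooks concrete, here is a minimal sketch (not the post's actual training script): oneCCL provides the "ccl" backend for distributed training, and ipex.optimize rewrites the model and optimizer to use the new CPU instructions. The tiny Linear model and hyperparameters are placeholders.

```python
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # imported for its side effect: registers the "ccl" backend
import intel_extension_for_pytorch as ipex

# The launcher (e.g. mpirun/torchrun) is expected to set MASTER_ADDR,
# MASTER_PORT, RANK, and WORLD_SIZE in the environment.
dist.init_process_group(backend="ccl")

model = torch.nn.Linear(768, 2)  # stand-in for a transformer model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Let IPEX prepare the model/optimizer for the new instructions
# (e.g. AMX with bfloat16 on Sapphire Rapids).
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

model = torch.nn.parallel.DistributedDataParallel(model)
```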
I Used ChatGPT to Create an Entire AI Application on AWS
Two days ago, OpenAI released ChatGPT, a new language model that is an improved version of GPT-3 and, possibly, gives us a peek into what GPT-4 will be capable of when it is released early next year (as is rumoured). With ChatGPT it is possible to have an actual conversation with the model, referring back to previous points in the conversation. I wanted to find out whether I could use this model as a pair programmer: I give it some instructions and it produces the code for me. I would still double-check those code snippets, of course, but at least I won't have to write them from scratch anymore. So in this blog post I describe how I used ChatGPT to create a simple sentiment analysis application from scratch.
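For a sense of the kind of snippet such a pair-programming session might produce (this is an illustrative sketch, not the code ChatGPT actually generated in the post), here is a Lambda-style handler that classifies sentiment with Amazon Comprehend via boto3:

```python
import json
import boto3

comprehend = boto3.client("comprehend")

def handler(event, context):
    """Illustrative Lambda-style handler: classify the sentiment of the posted text."""
    text = json.loads(event.get("body", "{}")).get("text", "")
    result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
    return {
        "statusCode": 200,
        "body": json.dumps({
            "sentiment": result["Sentiment"],       # POSITIVE / NEGATIVE / NEUTRAL / MIXED
            "scores": result["SentimentScore"],
        }),
    }
```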
Searching For Semantic Similarity!
Originally published on Towards AI. This blog post is all about what we look for while making friends!
Security Automation: Introduction and Benefits
Machine learning is one of the most prominent fields of the present age. It is being adopted and deployed across many industries, particularly healthcare, where ML performs remarkably well at detecting cancerous cells in patients at the earliest stage and at forecasting future pandemics or disease outbreaks in advance, so that governments can take precautionary measures. Even now, ML algorithms are helping humanity by contributing to the prediction of COVID-19 infections and to the search for vaccines against the virus. All of this makes machine learning attractive, because this technology will be the face of a future digitalized world. Security automation is one application area of ML, and it is one of the reasons to believe that a career in ML will remain firmly in demand. Security automation is proving to be the next big thing; once you think it over, you will realize how important it is to become a part of it.
- Health & Medicine (0.76)
- Information Technology > Security & Privacy (0.32)
Secure multi-account model deployment with Amazon SageMaker: Part 1
Amazon SageMaker Studio is a web-based, integrated development environment (IDE) for machine learning (ML) that lets you build, train, debug, deploy, and monitor your ML models. Although Studio provides all the tools you need to take your models from experimentation to production, you need a robust and secure model deployment process. This process must fulfill your organization's operational and security requirements. Amazon SageMaker and Studio provide a wide range of specialized functionality for building highly secure, scalable, and flexible MLOps platforms to cover your model deployment use cases and requirements. Three SageMaker services -- SageMaker Pipelines, SageMaker Projects, and SageMaker Model Registry -- build a foundation for implementing an enterprise-grade, secure, multi-account model deployment workflow.
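A minimal sketch of one building block, assuming the SageMaker Python SDK: registering a trained model as a versioned package in the SageMaker Model Registry, which downstream (e.g., staging and production) accounts can then deploy once approved. The image URI, artifact path, role ARN, group name, and instance types are placeholders.

```python
from sagemaker.model import Model

# Placeholder image, artifact location, and role.
model = Model(
    image_uri="<inference-image-uri>",
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
)

# Register a versioned model package; consumer accounts deploy approved versions.
model_package = model.register(
    model_package_group_name="my-model-group",
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.m5.large"],
    transform_instances=["ml.m5.large"],
    approval_status="PendingManualApproval",
)
```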
Securing Amazon SageMaker Studio internet traffic using AWS Network Firewall
Amazon SageMaker Studio is a web-based, fully integrated development environment (IDE) where you can perform end-to-end machine learning (ML) development to prepare data and build, train, and deploy models. Like other AWS services, Studio supports a rich set of security-related features that allow you to build highly secure and compliant environments. One of these fundamental security features allows you to launch Studio in your own Amazon Virtual Private Cloud (Amazon VPC). This allows you to control, monitor, and inspect network traffic within and outside your VPC using standard AWS networking and security capabilities. For more information, see Securing Amazon SageMaker Studio connectivity using a private VPC.
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Data Science (0.98)
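As a minimal sketch of the VPC-only launch mentioned above (illustrative only; the VPC, subnet, security group, and role identifiers are placeholders), a Studio domain can be created with all traffic routed through your own VPC via boto3:

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Placeholder VPC, subnet, security group, and role identifiers.
response = sagemaker.create_domain(
    DomainName="secure-studio-domain",
    AuthMode="IAM",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    AppNetworkAccessType="VpcOnly",   # route Studio traffic through your VPC
    DefaultUserSettings={
        "ExecutionRole": "arn:aws:iam::111122223333:role/SageMakerExecutionRole",
        "SecurityGroups": ["sg-0123456789abcdef0"],
    },
)
print(response["DomainArn"])
```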
Lead with DevSecOps to lower risk and raise value
Developing and deploying AI-powered systems and applications is a complex business, especially in our extended remote reality. You're likely facing an uphill climb and, let's face it, huge risks. The way to clear the obstacles, lower the risks, and raise the value you deliver hinges on one essential element: implementing DevSecOps to protect your process and your assets. We're operating in a different world now, where unity among development (Dev), security (Sec), and operations (Ops) has never been more essential. Compounded by the pressure to rapidly convert much of our office infrastructure to meet the needs of our remote reality during the COVID-19 pandemic, the market for DevSecOps is projected to grow from 32% to 34% by mid-decade.[i]